optical computing


A Hardware-Efficient Photonic Tensor Core: Accelerating Deep Neural Networks with Structured Compression

Ning, Shupeng, Zhu, Hanqing, Feng, Chenghao, Gu, Jiaqi, Pan, David Z., Chen, Ray T.

arXiv.org Artificial Intelligence

Recent advancements in artificial intelligence (AI) and deep neural networks (DNNs) have revolutionized numerous fields, enabling complex tasks by extracting intricate features from large datasets. However, the exponential growth in computational demands has outstripped the capabilities of traditional electrical hardware accelerators. Optical computing offers a promising alternative due to its inherent advantages of parallelism, high computational speed, and low power consumption. Yet, current photonic integrated circuits (PICs) designed for general matrix multiplication (GEMM) are constrained by large footprints, high costs of electro-optical (E-O) interfaces, and high control complexity, limiting their scalability. To overcome these challenges, we introduce a block-circulant photonic tensor core (CirPTC) for a structure-compressed optical neural network (StrC-ONN) architecture. By applying a structured compression strategy to weight matrices, StrC-ONN significantly reduces model parameters and hardware requirements while preserving the universal representability of networks and maintaining comparable expressivity. Additionally, we propose a hardware-aware training framework to compensate for on-chip nonidealities to improve model robustness and accuracy. We experimentally demonstrate image processing and classification tasks, achieving up to a 74.91% reduction in trainable parameters while maintaining competitive accuracies. Performance analysis expects a computational density of 5.84 tera operations per second (TOPS) per mm^2 and a power efficiency of 47.94 TOPS/W, marking a 6.87-times improvement achieved through the hardware-software co-design approach. By reducing both hardware requirements and control complexity across multiple dimensions, this work explores a new pathway to push the limits of optical computing in the pursuit of high efficiency and scalability.
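The structured compression the abstract describes constrains each weight block to be circulant, so a k-by-k block is fully defined by its first column and the block's matrix-vector product reduces to FFTs. A minimal NumPy sketch of that general idea (illustrative names and shapes, not the authors' CirPTC implementation):

```python
import numpy as np

def circulant(c):
    """Dense circulant matrix whose first column is c (used only for checking)."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

def block_circulant_matvec(first_cols, x):
    """y = W @ x where W is a (p*k, q*k) block-circulant matrix.

    first_cols has shape (p, q, k): the first column of each k-by-k
    circulant block. Storing only these columns cuts the parameter
    count from p*q*k*k down to p*q*k.
    """
    p, q, k = first_cols.shape
    Xf = np.fft.fft(x.reshape(q, k), axis=1)   # spectrum of each input chunk
    Cf = np.fft.fft(first_cols, axis=2)        # spectrum of each block's column
    Yf = (Cf * Xf[None, :, :]).sum(axis=1)     # multiply-accumulate per frequency bin
    return np.real(np.fft.ifft(Yf, axis=1)).ravel()
```

Because each circulant block is diagonalized by the FFT, the per-block cost drops from O(k^2) to O(k log k), which is one source of the hardware savings the abstract reports.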


Optical Computing for Deep Neural Network Acceleration: Foundations, Recent Developments, and Emerging Directions

Pasricha, Sudeep

arXiv.org Artificial Intelligence

Emerging artificial intelligence applications across the domains of computer vision, natural language processing, graph processing, and sequence prediction increasingly rely on deep neural networks (DNNs). These DNNs require significant compute and memory resources for training and inference. Traditional computing platforms such as CPUs, GPUs, and TPUs are struggling to keep up with the demands of the increasingly complex and diverse DNNs. Optical computing represents an exciting new paradigm for light-speed acceleration of DNN workloads. In this article, we discuss the fundamentals and state-of-the-art developments in optical computing, with an emphasis on DNN acceleration. Various promising approaches are described for engineering optical devices, enhancing optical circuits, and designing architectures that can adapt optical computing to a variety of DNN workloads. Novel techniques for hardware/software co-design that can intelligently tune and map DNN models to improve performance and energy-efficiency on optical computing platforms across high performance and resource constrained embedded, edge, and IoT platforms are also discussed. Lastly, several open problems and future directions for research in this domain are highlighted.


Software Testing, Artificial Intelligence and Machine Learning Trends in 2023

#artificialintelligence

In many ways, 2022 has been a watershed year for software; with the worst ravages of the pandemic behind us, we can now see which changes were temporary and which have become structural. As a result, companies that used software to build sustainable long-term businesses that disrupted the pre-pandemic status quo have thrived, while those that were simply techno-fads have been consigned to the dustbin of history. The software testing industry has likewise been transformed by changes in working practices and by the critical role software and IT now play in the world, with a move to quality-engineering practices and increased automation. At the same time, we're seeing significant advances in machine learning, artificial intelligence, and the large neural networks that make them possible.


Artificial intelligence and the rise of optical computing

#artificialintelligence

Modern information technology (IT) relies on a division of labour: photons carry data around the world and electrons process them. But before optical fibres, electrons did both, and some people hope to complete the transition by having photons process data as well as carry them. Unlike electrons, photons (which are electrically neutral) can cross each other's paths without interacting, so glass fibres can handle many simultaneous signals in a way that copper wires cannot.


Harnessing Noise in Optical Computing for AI - ELE Times

#artificialintelligence

Artificial intelligence and machine learning are currently affecting our lives in many small but impactful ways. For example, AI and machine learning applications recommend entertainment we might enjoy through streaming services such as Netflix and Spotify. In the near future, it's predicted that these technologies will have an even larger impact on society through activities such as driving fully autonomous vehicles, enabling complex scientific research and facilitating medical discoveries. But the computers used for AI and machine learning demand a lot of energy. Currently, the need for computing power related to these technologies is doubling roughly every three to four months.
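That doubling rate compounds quickly: demand that doubles every d months grows by a factor of 2^(12/d) per year. A quick back-of-the-envelope check (`annual_growth` is a hypothetical helper, not from the article):

```python
def annual_growth(doubling_months):
    """Yearly growth factor when demand doubles every `doubling_months` months."""
    return 2 ** (12 / doubling_months)

print(annual_growth(3))  # doubling every 3 months -> 16.0x per year
print(annual_growth(4))  # doubling every 4 months -> 8.0x per year
```

So "doubling every three to four months" corresponds to roughly an 8x to 16x increase in compute demand each year, which is why the article's energy concern is so pressing.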


Harnessing Noise In Optical Computing For AI - AI Summary

#artificialintelligence

In the near future, it's predicted that these technologies will have an even larger impact on society through activities such as driving fully autonomous vehicles, enabling complex scientific research and facilitating medical discoveries. Cloud computing data centers used by AI and machine learning applications worldwide are already devouring more electrical power per year than some small countries. A research team led by the University of Washington has developed new optical computing hardware for AI and machine learning that is faster and much more energy efficient than conventional electronics. Optical computing noise essentially comes from stray light particles, or photons, that originate from the operation of lasers within the device and from background thermal radiation. Of course, the optical computer didn't have a human hand for writing, so its form of "handwriting" was to generate digital images that had a style similar to the samples it had studied, but were not identical to them.


Accelerating AI at the speed of light

#artificialintelligence

Improved computing power and an exponential increase in data have helped fuel the rapid rise of artificial intelligence. But as AI systems become more sophisticated, they'll need even more computational power to address their needs, which traditional computing hardware most likely won't be able to keep up with. To solve the problem, MIT spinout Lightelligence is developing the next generation of computing hardware. The Lightelligence solution makes use of the silicon fabrication platform used for traditional semiconductor chips, but in a novel way. Rather than building chips that use electricity to carry out computations, Lightelligence develops components powered by light that are low energy and fast, and they might just be the hardware we need to power the AI revolution.


The AI arms race spawns new hardware architectures

#artificialintelligence

As society turns to artificial intelligence to solve problems across ever more domains, we're seeing an arms race to create specialized hardware that can run deep learning models at higher speeds and lower power consumption. Some recent breakthroughs in this race include new chip architectures that perform computations in ways that are fundamentally different from what we've seen before. Looking at their capabilities gives us an idea of the kinds of AI applications we could see emerging over the next couple of years. Neural networks, composed of thousands or millions of small programs that each perform simple calculations, are key to deep learning, enabling complicated tasks such as detecting objects in images or converting speech to text. But traditional computers are not optimized for neural network operations.
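The "small programs" this passage describes are individual artificial neurons: each computes a weighted sum of its inputs and passes it through a simple nonlinearity. A minimal sketch in plain Python (the ReLU activation and the numbers are illustrative):

```python
def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, plus bias, through a ReLU."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, s)

# Two inputs with hand-picked weights: 0.8*1.0 + (-0.5)*2.0 + 0.3 = 0.1
print(neuron([1.0, 2.0], [0.8, -0.5], 0.3))
```

A layer is just many such neurons applied to the same inputs, which is why deep learning workloads reduce to large matrix multiplications and why accelerators, optical ones included, target that operation specifically.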